20225421 권다희 HW4
0. Load Image with dcraw
To work with the RAW files, we first convert them into readable linear 16-bit images. The dcraw options are:
- convert to a linear 16-bit .tiff image (-T -6 options)
- apply white balancing (-w option)
- high-quality interpolation (-q 3 option)
- output in sRGB (-o 1 option)
dc = readraw;
readraw: installed ReadRaw as loader for RAW camera images.
imgs_r = cell(1,16);
for k = 1:16
    imgs_r{k} = imread(dc, sprintf('../exposure_stack/exposure%d.nef',k), '-w -T -6 -q 3 -o 1');
end
im1_r = imgs_r{1}; im16_r = imgs_r{16};      % named aliases used in later sections
0. Load non-linear rendered images
I also load the non-linearly rendered images (.jpg files).
imgs = cell(1,16);
for k = 1:16
    imgs{k} = imread(sprintf('../exposure_stack/exposure%d.jpg',k));
end
im1 = imgs{1}; im2 = imgs{2};                % named aliases used in later sections
1.1 Linearize - with uniform weight
For High Dynamic Range imaging we need linear images, so I linearize the .jpg images by solving the linear system introduced in the lecture.
- I set lambda, the hyper-parameter of the smoothness constraint, to 150.
- There are 3 types of weighting functions I can use.
- The uniform weight treats every intensity equally; the tent and Gaussian weights rely more on well-exposed pixels than on under- or over-exposed ones.
- After solving the system, we obtain the function g, which maps a pixel value to its linearized value.
- The shape of g for each weighting scheme is shown below (S-shaped).
- An implementation detail I take from the original paper is to sample enough pixel values to cover the whole intensity distribution.
- I use only 300 randomly sampled pixels, which is sufficient to recover a proper g.
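The three weighting schemes can be summarized in one helper. This is a hedged sketch: the helper name `weight_fn` and the exact tent normalization are my own, and the report's actual `weighten` helper appears at the end of this report.

```matlab
function w = weight_fn(type,im_type)
% Lookup table indexed as w(z+1) for a pixel value z.
if strcmp(im_type,'jpg'), n = 256; else, n = 65536; end   % 8-bit jpg vs 16-bit raw
x = linspace(0,1,n);                         % normalized pixel intensity
switch type
    case 'unif'                              % every intensity counts equally
        w = ones(1,n);
    case 'tent'                              % favor mid-tones, linear falloff
        w = min(x,1-x);
    case 'gauss'                             % favor mid-tones, smooth falloff
        w = exp(-4*(x-0.5).^2/(0.5^2));
end
end
```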
l = 150;                                     % smoothness lambda
w = weighten('unif','jpg');                  % uniform weighting function
B = log([1/2048,1/1024,1/512,1/256,1/128,1/64,1/32,1/16,1/8,1/4,1/2,1,2,4,8,16]);   % log exposure times
% solved once per color channel c = 1:3
Z = image_stack(imgs,c,300);                 % 300 randomly sampled pixels per exposure
[g_c, lE_c] = gsolve(Z,B,l,w);
title('The curve of function g with uniform weight');
- Convert non-linear images into linear ones
lin_imgs{k}(:,:,c) = exp(g_tmp(imgs{k}(:,:,c)+1));
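The line above runs inside two elided loops, over exposures k and channels c. A minimal sketch of the full conversion, where `g_all{c}` stands for the response curve solved per channel (my naming), and the `double()` cast is needed so a uint8 value of 255 can index entry 256 without saturating:

```matlab
lin_imgs = cell(1,16);
for k = 1:16                                 % each exposure
    for c = 1:3                              % each color channel
        g_tmp = g_all{c};                    % assumed: per-channel response from gsolve
        % g maps a pixel value 0..255 to log exposure; +1 for 1-based indexing
        lin_imgs{k}(:,:,c) = exp(g_tmp(double(imgs{k}(:,:,c))+1));
    end
end
```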
1.1 Linearize - with tent weight
w = weighten('tent','jpg');
B = log([1/2048,1/1024,1/512,1/256,1/128,1/64,1/32,1/16,1/8,1/4,1/2,1,2,4,8,16]);
Z = image_stack(imgs,c,300);
[g_c, lE_c] = gsolve(Z,B,l,w);
title('The curve of function g with tent weight');
- Convert non-linear images into linear ones
lin_imgs_t{k}(:,:,c) = exp(g_tmp(imgs{k}(:,:,c)+1));
1.1 Linearize - with Gaussian weight
x = linspace(0,1,256);                       % normalized intensity grid (assumed)
w = exp(-4.*(x-0.5).^2/(0.5.^2));            % Gaussian weight
B = log([1/2048,1/1024,1/512,1/256,1/128,1/64,1/32,1/16,1/8,1/4,1/2,1,2,4,8,16]);
Z = image_stack(imgs,c,300);
[g_c, lE_c] = gsolve(Z,B,l,w);
title('The curve of function g with Gaussian weight');
- Convert non-linear images into linear ones
lin_imgs_g{k}(:,:,c) = exp(g_tmp(imgs{k}(:,:,c)+1));
1.2 Merge Exposure Stack into HDR Image
With the linearized images, I build the HDR images. There are 3 choices when merging the exposure stack:
- Domain: log / linear
- Image: raw / rendered
- Weight: uniform / tent / Gaussian (although later I do not use the Gaussian weight)
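All 12 merge variants below follow the same weighted-average pattern; a hedged sketch (the selector variable `domain` and the 1e-8 guard against log(0) are my additions; for raw inputs the already-linear imgs_r take the place of lin_imgs):

```matlab
hdr = zeros(size(lin_imgs{1}));
for c = 1:3
    nom = zeros(size(hdr,1),size(hdr,2));  denom = nom;
    for k = 1:16
        wk = w(imgs{k}(:,:,c)+1);            % weight by the observed (non-linear) pixel value
        I  = im2double(lin_imgs{k}(:,:,c));  % linearized irradiance estimate
        if strcmp(domain,'lin')
            nom = nom + wk.*I./exp(B(k));              % average of I / t_k
        else
            nom = nom + wk.*(log(I + 1e-8) - B(k));    % average of log I - log t_k
        end
        denom = denom + wk;
    end
    if strcmp(domain,'lin'), hdr(:,:,c) = nom./denom;
    else,                    hdr(:,:,c) = exp(nom./denom);
    end
end
```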
1. uniform weight filter, rendered images with linear domain
w = weighten('unif','jpg');
unif_lin_rendered = zeros(size(im1,1),size(im1,2),size(im1,3));
nom = zeros(size(im1,1),size(im1,2));
denom = zeros(size(im1,1),size(im1,2));
nom = nom + w(imgs{k}(:,:,c)+1).*im2double(lin_imgs{k}(:,:,c))./exp(B(k));
denom = denom + w(imgs{k}(:,:,c)+1);
unif_lin_rendered(:,:,c) = im2double(nom./denom);
hdrwrite(unif_lin_rendered,'../results/unif_lin_rendered.hdr');
2. uniform weight filter, rendered images with log domain
w = weighten('unif','jpg');
unif_log_rendered = zeros(size(im1,1),size(im1,2),size(im1,3));
nom = zeros(size(imgs{k},1),size(imgs{k},2));
denom = zeros(size(imgs{k},1),size(imgs{k},2));
nom = nom + w(imgs{k}(:,:,c)+1).*(log(im2double(lin_imgs{k}(:,:,c))) - B(k));
denom = denom + w(imgs{k}(:,:,c)+1);
unif_log_rendered(:,:,c) = exp(nom./denom);
hdrwrite(unif_log_rendered,'../results/unif_log_rendered.hdr');
3. uniform weight filter, raw images with linear domain
w = weighten('unif','raw');
unif_lin_raw = zeros(size(im1_r,1),size(im1_r,2),size(im1_r,3));
nom = zeros(size(imgs_r{k},1),size(imgs_r{k},2));
denom = zeros(size(imgs_r{k},1),size(imgs_r{k},2));
nom = nom + w(imgs_r{k}(:,:,c)+1).*im2double(imgs_r{k}(:,:,c))./exp(B(k));
denom = denom + w(imgs_r{k}(:,:,c)+1);
unif_lin_raw(:,:,c) = im2double(nom./denom);
hdrwrite(unif_lin_raw,'../results/unif_lin_raw.hdr');
4. uniform weight filter, raw images with log domain
w = weighten('unif','raw');
unif_log_raw = zeros(size(im1_r,1),size(im1_r,2),size(im1_r,3));
nom = zeros(size(imgs_r{k},1),size(imgs_r{k},2));
denom = zeros(size(imgs_r{k},1),size(imgs_r{k},2));
nom = nom + w(imgs_r{k}(:,:,c)+1).*(log(im2double(imgs_r{k}(:,:,c))) - B(k));
denom = denom + w(imgs_r{k}(:,:,c)+1);
unif_log_raw(:,:,c) = exp(nom./denom);
hdrwrite(unif_log_raw,'../results/unif_log_raw.hdr');
5. tent weight filter, rendered images with linear domain
w = weighten('tent','jpg');
tent_lin_rendered = zeros(size(im1,1),size(im1,2),size(im1,3));
nom = zeros(size(im1,1),size(im1,2));
denom = zeros(size(im1,1),size(im1,2));
nom = nom + w(imgs{k}(:,:,c)+1).*im2double(lin_imgs_t{k}(:,:,c))./exp(B(k));
denom = denom + w(imgs{k}(:,:,c)+1);
tent_lin_rendered(:,:,c) = im2double(nom./denom);
hdrwrite(tent_lin_rendered,'../results/tent_lin_rendered.hdr');
6. tent weight filter, rendered images with log domain
w = weighten('tent','jpg');
tent_log_rendered = zeros(size(im1,1),size(im1,2),size(im1,3));
B = log([1/2048,1/1024,1/512,1/256,1/128,1/64,1/32,1/16,1/8,1/4,1/2,1,2,4,8,16]);
nom = zeros(size(imgs{k},1),size(imgs{k},2));
denom = zeros(size(imgs{k},1),size(imgs{k},2));
nom = nom + w(imgs{k}(:,:,c)+1).*(log(im2double(lin_imgs_t{k}(:,:,c))) - B(k));
denom = denom + w(imgs{k}(:,:,c)+1);
tent_log_rendered(:,:,c) = exp(nom./denom);
hdrwrite(tent_log_rendered,'../results/tent_log_rendered.hdr');
7. tent weight filter, raw images with linear domain
w = weighten('tent','raw');
tent_lin_raw = zeros(size(im1_r,1),size(im1_r,2),size(im1_r,3));
nom = zeros(size(imgs_r{k},1),size(imgs_r{k},2));
denom = zeros(size(imgs_r{k},1),size(imgs_r{k},2));
nom = nom + w(imgs_r{k}(:,:,c)+1).*im2double(imgs_r{k}(:,:,c))./exp(B(k));
denom = denom + w(imgs_r{k}(:,:,c)+1);
tent_lin_raw(:,:,c) = im2double(nom./denom);
hdrwrite(tent_lin_raw,'../results/tent_lin_raw.hdr');
8. tent weight filter, raw images with log domain
w = weighten('tent','raw');
tent_log_raw = zeros(size(im1_r,1),size(im1_r,2),size(im1_r,3));
nom = zeros(size(imgs_r{k},1),size(imgs_r{k},2));
denom = zeros(size(imgs_r{k},1),size(imgs_r{k},2));
nom = nom + w(imgs_r{k}(:,:,c)+1).*(log(im2double(imgs_r{k}(:,:,c))) - B(k));
denom = denom + w(imgs_r{k}(:,:,c)+1);
tent_log_raw(:,:,c) = exp(nom./denom);
hdrwrite(im2double(tent_log_raw),'../results/tent_log_raw.hdr');
9. gaussian weight filter, rendered images with linear domain
w = exp(-4.*(x-0.5).^2/(0.5.^2));
gauss_lin_rendered = zeros(size(im1,1),size(im1,2),size(im1,3));
nom = zeros(size(im1,1),size(im1,2));
denom = zeros(size(im1,1),size(im1,2));
nom = nom + w(imgs{k}(:,:,c)+1).*im2double(lin_imgs_g{k}(:,:,c))./exp(B(k));
denom = denom + w(imgs{k}(:,:,c)+1);
gauss_lin_rendered(:,:,c) = im2double(nom./denom);
hdrwrite(gauss_lin_rendered,'../results/gauss_lin_rendered.hdr');
10. gaussian weight filter, rendered images with log domain
w = exp(-4.*(x-0.5).^2/(0.5.^2));
gauss_log_rendered = zeros(size(im1,1),size(im1,2),size(im1,3));
nom = zeros(size(imgs{k},1),size(imgs{k},2));
denom = zeros(size(imgs{k},1),size(imgs{k},2));
nom = nom + w(imgs{k}(:,:,c)+1).*(log(im2double(lin_imgs_g{k}(:,:,c))) - B(k));
denom = denom + w(imgs{k}(:,:,c)+1);
gauss_log_rendered(:,:,c) = exp(nom./denom);
hdrwrite(gauss_log_rendered,'../results/gauss_log_rendered.hdr');
11. gaussian weight filter, raw images with linear domain
w = exp(-4.*(x-0.5).^2/(0.5.^2));
gauss_lin_raw = zeros(size(im1_r,1),size(im1_r,2),size(im1_r,3));
nom = zeros(size(imgs_r{k},1),size(imgs_r{k},2));
denom = zeros(size(imgs_r{k},1),size(imgs_r{k},2));
nom = nom + w(imgs_r{k}(:,:,c)+1).*im2double(imgs_r{k}(:,:,c))./exp(B(k));
denom = denom + w(imgs_r{k}(:,:,c)+1);
gauss_lin_raw(:,:,c) = im2double(nom./denom);
hdrwrite(gauss_lin_raw,'../results/gauss_lin_raw.hdr');
12. gaussian weight filter, raw images with log domain
w = exp(-4.*(x-0.5).^2/(0.5.^2));
gauss_log_raw = zeros(size(im1_r,1),size(im1_r,2),size(im1_r,3));
nom = zeros(size(imgs_r{k},1),size(imgs_r{k},2));
denom = zeros(size(imgs_r{k},1),size(imgs_r{k},2));
nom = nom + w(imgs_r{k}(:,:,c)+1).*(log(im2double(imgs_r{k}(:,:,c))) - B(k));
denom = denom + w(imgs_r{k}(:,:,c)+1);
gauss_log_raw(:,:,c) = exp(nom./denom);
hdrwrite(gauss_log_raw,'../results/gauss_log_raw.hdr');
- The resulting HDR images (for each option) are shown below:
- Qualitatively, "uniform weight / log domain / rendered imgs" and "tent weight / log domain / rendered imgs" perform well.
1.3 Evaluation
- To quantitatively evaluate the HDR images, I check their linearity.
- I use the color checker in the image, specifically its 6 neutral patches.
- I crop each of the neutral patches.
- For every HDR image (8 in total), I run a linear regression
- (x = average luminance of the HDR image, y = average luminance of the original raw image, i.e., the actual luminance).
- Then I compute the sum of squared errors (the quantity minimized in least squares).
- I also plot the log(average luminance) for each HDR image.
patch_actual_avg = zeros(6,1);
min_x = min(round(x(:,i))); max_x = max(round(x(:,i))); min_y = min(round(y(:,i))); max_y = max(round(y(:,i)));
patch = tent_lin_raw(min_y:max_y, min_x:max_x,:);
patch_actual = im16_r(min_y:max_y, min_x:max_x,:);
patch_xyz = rgb2xyz(patch,'ColorSpace','linear-rgb');
patch_xyz_actual = rgb2xyz(patch_actual,'ColorSpace','linear-rgb');
patch_avg(i) = mean(patch_xyz(:,:,2),'all');
patch_actual_avg(i) = mean(patch_xyz_actual(:,:,2),'all');
lm_model = fitlm(log(patch_avg),log(patch_actual_avg));
title('Linear regression to the log(avg luminance)')
- The linear regression fits well.
SSE = sum(lm_model.Residuals{:,1}.^2)
- These are the SSE values of the HDR images.
- As in the qualitative evaluation, the HDR images merged from rendered images in the log domain have small SSE.
- The raw image / tent weight / linear domain HDR image also has a small SSE, but it does not perform well in the qualitative evaluation. I suspect its good score comes from the dependent variable of the linear regression: I use the raw images as the dependent variable, and the same raw images are used to merge that HDR image.
patch_avg_tliraw = patch_avg;
p1 = plot(log(patch_avg_tlor));
p2 = plot(log(patch_avg_tlir));
p3 = plot(log(patch_avg_ulir));
p4 = plot(log(patch_avg_ulor));
p5 = plot(log(patch_avg_tloraw));
p6 = plot(log(patch_avg_tliraw));
p7 = plot(log(patch_avg_uliraw));
p8 = plot(log(patch_avg_uloraw));
lgd = legend([p1,p4,p6,p8],{'tent/log/rendered','uniform/log/rendered','tent/linear/raw','uniform/log/raw'});
title('Logarithmic plot of average luminances for various HDR imgs')
- I make a logarithmic plot comparing the six average patch luminances for the various HDR images.
- Most of the HDR images show a straight line.
Considering all of these evaluations, I will use the "tent weight / log-domain merging / rendered images" HDR image for the following tonemapping.
2.1 Photographic Tonemapping
- Roughly speaking, photographic tonemapping maps an HDR image through HDR/(1+HDR). This non-linear scaling brings pixel values into (roughly) the 0-1 range while leaving the dark areas nearly unchanged.
- There are two parameters, K and B: K controls the brightness of the image and B controls the contrast.
- Empirically I find that B=0.5 / K=0.8 is the best option.
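For a single channel, the operator boils down to four lines. This is a sketch with the K, B values chosen above; `I` stands for one channel of the HDR image (my naming):

```matlab
K = 0.8;  B = 0.5;                           % key (brightness) and burn (contrast)
I_m   = exp(mean(log(I + 1e-5),'all'));      % log-average: the "key" of the scene
I_hdr = (K / I_m) * I;                       % scale so the key maps to K
I_wht = B * max(I_hdr,[],'all');             % smallest value mapped to pure white
I_tm  = I_hdr .* (1 + I_hdr / I_wht^2) ./ (1 + I_hdr);   % the HDR/(1+HDR) compression
```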
I do tonemapping on both the RGB color space and the XYZ luminance space (only Y).
- For color channel
I_hdr = zeros(size(im1,1),size(im1,2),size(im1,3));
I_tonemap = zeros(size(im1,1),size(im1,2),size(im1,3));
% per color channel c = 1:3; K is the key, B sets the white point
I_m = exp(mean(log(tent_log_rendered(:,:,c)+0.00001),'all'));   % log-average of the channel
I_hdr(:,:,c) = K/I_m*tent_log_rendered(:,:,c);                  % scale so the log-average maps to K
I_white = B * max(I_hdr(:,:,c),[],'all');                       % smallest value mapped to pure white
I_tonemap(:,:,c) = I_hdr(:,:,c).*(1+I_hdr(:,:,c)/(I_white.^2))./(1+I_hdr(:,:,c));
I_tonemap = im2uint8(I_tonemap);
imwrite(I_tonemap,'../results/I_tonemap_rgb.jpg');
- These are the results for various K, B.
- For luminance channel Y
xyz = rgb2xyz(tent_log_rendered,'Colorspace','linear-rgb');
xyY = zeros(size(im1,1),size(im1,2),size(im1,3));
xyY(:,:,1) = xyz(:,:,1)./(xyz(:,:,1)+xyz(:,:,2)+xyz(:,:,3));
xyY(:,:,2) = xyz(:,:,2)./(xyz(:,:,1)+xyz(:,:,2)+xyz(:,:,3));
xyY(:,:,3) = xyz(:,:,2);                     % luminance Y, used below
I_hdr = zeros(size(im1,1),size(im1,2));
I_m = exp(mean(log(xyY(:,:,3)+0.00001),'all'));
I_hdr = K/I_m*xyY(:,:,3);
I_white = B * max(I_hdr,[],'all');
I_tonemap = I_hdr.*(1+I_hdr/(I_white.^2))./(1+I_hdr);
I_tonemap_xyz = zeros(size(im1,1),size(im1,2),size(im1,3));
I_tonemap_xyz(:,:,1) = xyY(:,:,1).*I_tonemap./xyY(:,:,2);
I_tonemap_xyz(:,:,2) = I_tonemap;
I_tonemap_xyz(:,:,3) = I_tonemap.*(1-xyY(:,:,1)-xyY(:,:,2))./xyY(:,:,2);
I_tonemap_rgb = xyz2rgb(I_tonemap_xyz);
I_tonemap = im2uint8(I_tonemap_rgb);
imwrite(I_tonemap,'../results/I_tonemap_l.jpg');
- These are the results for various K, B.
The results in the RGB color space are generally bluish, while the results in the luminance space are yellowish. The larger the K value, the brighter the image (the better for this scene); the larger the B value, the greater the contrast (B has only a small influence on this image).
With the photographic tonemapping algorithm, I like "K=0.8 / B=0.5 on RGB space" the most.
2.2 Tonemapping using bilateral filtering
- This tonemapping algorithm uses bilateral filtering, a smoothing filter that preserves edges.
- If we extract the luminance and smooth it with the bilateral filter, we can tonemap while preserving the details and the color information.
- I again conduct the tonemapping on both the RGB channels and the luminance space.
- I set sigma_r to 0.4 (following the original paper) and the window size to 10 (found empirically).
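The base/detail decomposition used below can be sketched as follows for a single luminance channel. `Y` and the contrast factor `S` are my placeholder names for the sketch; bfilter2 is the provided bilateral-filter helper, called with window 10 and sigma_r 0.4 as above:

```matlab
L      = log(Y + 1e-8);                      % log luminance
base   = bfilter2(rescale(L), 10, 0.4);      % base layer: edge-preserving smoothing
base   = base*(max(L,[],'all')-min(L,[],'all')) + min(L,[],'all');  % undo the rescale
detail = L - base;                           % detail layer, kept untouched
S      = 2;                                  % scale applied to the base layer
base_s = S * (base - max(base,[],'all'));    % scale the base; its maximum stays at 0
Y_tm   = exp(base_s + detail);               % recombine and leave the log domain
```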
- For color channel
Lij = log(tent_log_rendered);
B_filter = zeros(size(im2,1),size(im2,2),size(im2,3));
Lij_nom = zeros(size(im2,1),size(im2,2),size(im2,3));
Lij_nom(:,:,c) = rescale(Lij(:,:,c));
B_filter = bfilter2(Lij_nom,w,sigma_r);
B_filter_denom = B_filter.*(max(B_filter,[],'all')-min(B_filter,[],'all'))+min(B_filter,[],'all');
Dij = Lij - B_filter_denom;
B_filter2 = 2 .* (B_filter_denom - max(B_filter_denom,[],'all'));
I_tonemap_filter = im2uint8(exp(B_filter2 + Dij));
imwrite(I_tonemap_filter,'../results/I_tonemap_filter_c_2.jpg');
- These are the results for various S. As S goes up, the image gains contrast; as S goes down, edge information is lost.
- I think S=2 is the best option.
- For the luminance
xyz = rgb2xyz(tent_log_rendered,'Colorspace','linear-rgb');
xyY = zeros(size(im1,1),size(im1,2),size(im1,3));
xyY(:,:,1) = xyz(:,:,1)./(xyz(:,:,1)+xyz(:,:,2)+xyz(:,:,3));
xyY(:,:,2) = xyz(:,:,2)./(xyz(:,:,1)+xyz(:,:,2)+xyz(:,:,3));
xyY(:,:,3) = xyz(:,:,2);                     % luminance Y, used below
B_filter = zeros(size(im2,1),size(im2,2));
Lij_nom = zeros(size(im2,1),size(im2,2),size(im2,3));
Lij_nom(:,:,c) = rescale(Lij(:,:,c));
B_filter = bfilter2(Lij_nom(:,:,3),w,sigma_r);
B_filter_denom = B_filter.*(max(B_filter,[],'all')-min(B_filter,[],'all'))+min(B_filter,[],'all');
Dij = Lij(:,:,3) - B_filter_denom;
B_filter2 = 2 .* (B_filter_denom - max(B_filter_denom,[],'all'));
I_tonemap_filter = exp(B_filter2 + Dij);
I_tonemap_xyz = zeros(size(im1,1),size(im1,2),size(im1,3));
I_tonemap_xyz(:,:,1) = xyY(:,:,1).*I_tonemap_filter./xyY(:,:,2);
I_tonemap_xyz(:,:,2) = I_tonemap_filter;
I_tonemap_xyz(:,:,3) = I_tonemap_filter.*(1-xyY(:,:,1)-xyY(:,:,2))./xyY(:,:,2);
I_tonemap_rgb = im2uint8(xyz2rgb(I_tonemap_xyz));
imwrite(I_tonemap_rgb,'../results/I_tonemap_filter_l_2.jpg');
- These are the results for various S. As S goes up, the image gains contrast; as S goes down, edge information is lost.
- I think S=2 is the best option.
The RGB-space result has strong overall contrast, so the details in the dark regions are lost; the luminance-space result does not have this problem. The image contrast varies with S, and the result is best at S=2. However, the image saved as .hdr and the one converted to uint8 differ noticeably, which suggests the final scaling was not done well.
In conclusion, luminance space with S=2 has the best result.
Compared with the photographic tonemapping algorithm, I like the bilateral-filter tonemapping in the luminance space (S=2) the most, because it preserves the color balance quite well while maintaining the details.
3. Bonus: Implement a different gradient-domain processing
- This method attenuates the gradient magnitudes in the luminance (Y) channel.
- First, it constructs a Gaussian pyramid and computes the gradients at each pyramid level.
- Then it computes the attenuation function Phi iteratively, from the coarsest level to the finest.
- Finally, it estimates the tonemapped intensities by solving a Poisson equation.
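At a single pyramid level, the attenuation map Phi is computed from the gradient magnitude roughly as below. This is a sketch: `gx`, `gy` stand for that level's gradients, alpha is 0.1 times the mean magnitude as in the code that follows, and beta < 1 is the attenuation exponent; note the code below applies this to the x and y gradients separately rather than to the combined magnitude.

```matlab
mag   = sqrt(gx.^2 + gy.^2) + 1e-8;          % gradient magnitude (guard against /0)
alpha = 0.1 * mean(mag,'all');               % gradients below alpha get mildly boosted
beta  = 0.9;                                 % beta < 1: large gradients are attenuated
phi   = (alpha ./ mag) .* (mag ./ alpha).^beta;   % Fattal's attenuation factor
```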
1. Convert to the XYZ space and extract the Y luminance values.
xyz = rgb2xyz(tent_log_rendered,'Colorspace','linear-rgb');
H = imresize(log(xyz(:,:,2)),0.1);
2. Construct the Gaussian pyramid
H_pyramid = genpyramid(H,3);
3. Compute the gradients on each levels of pyramid
H_pyramid_grad_x = cell(1,4);
H_pyramid_grad_y = cell(1,4);
for k = 1:4                                  % each pyramid level
    H_pyramid_grad_x{k} = zeros(size(H_pyramid{k},1)-1,size(H_pyramid{k},2));
    for i = 1:size(H_pyramid{k},1)-1
        for j = 1:size(H_pyramid{k},2)
            H_pyramid_grad_x{k}(i,j) = H_pyramid{k}(i+1,j)-H_pyramid{k}(i,j);
        end
    end
    H_pyramid_grad_y{k} = zeros(size(H_pyramid{k},1),size(H_pyramid{k},2)-1);
    for i = 1:size(H_pyramid{k},1)
        for j = 1:size(H_pyramid{k},2)-1
            H_pyramid_grad_y{k}(i,j) = H_pyramid{k}(i,j+1)-H_pyramid{k}(i,j);
        end
    end
end
4. Compute the attenuation function Phi iteratively
alpha_x = mean(sqrt((H_pyramid_grad_x{5-i}).^2),'all').*0.1;
alpha_y = mean(sqrt((H_pyramid_grad_y{5-i}).^2),'all').*0.1;
factor_x = (alpha_x ./sqrt((H_pyramid_grad_x{5-i}).^2)) .* (sqrt((H_pyramid_grad_x{5-i}).^2)/alpha_x).^beta;
factor_y = (alpha_y ./sqrt((H_pyramid_grad_y{5-i}).^2)) .* (sqrt((H_pyramid_grad_y{5-i}).^2)/alpha_y).^beta;
Pi_x{5-i} = imresize(Pi_x{6-i},[size(factor_x,1),size(factor_x,2)],'bilinear').* factor_x;
Pi_y{5-i} = imresize(Pi_y{6-i},[size(factor_y,1),size(factor_y,2)],'bilinear').*factor_y;
5. Compute G, which represents the attenuated derivatives
G_x = H_pyramid_grad_x{1}.*Pi_x{1};
G_y = H_pyramid_grad_y{1}.*Pi_y{1};
6. Solve the Poisson equation to get the reconstructed pixel intensities.
im2var(1:imh*imw) = 1:imh*imw;
A = sparse([], [], [], ncol, nrow, 0);
b(e) = G_x(i,j) + G_y(i,j) - G_x(i-1,j) - G_y(i,j-1);
b(e) = G_x(i,j) + G_y(i,j) - G_y(i,j-1);
b(e) = G_y(i,j) - G_x(i-1,j) - G_y(i,j-1);
b(e) = G_x(i,j) + G_y(i,j) - G_x(i-1,j);
b(e) = G_x(i,j) - G_x(i-1,j) - G_y(i,j-1);
b(e) = G_x(1,1) + G_y(1,1);
A(e, im2var(1,imw)) = -4;
A(e, im2var(1,imw-1)) = 1;
b(e) = G_x(1,imw) - G_y(1,imw-1);
A(e, im2var(imh,1)) = -4;
A(e, im2var(imh-1,1)) = 1;
b(e) = G_y(imh,1) - G_x(imh-1,1);
A(e, im2var(imh,imw)) = -4;
A(e, im2var(imh-1,imw)) = 1;
A(e, im2var(imh,imw-1)) = 1;
b(e) = - G_x(imh-1,imw) - G_y(imh,imw-1);
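The boundary cases listed above all instantiate the same 5-point Laplacian; for an interior pixel, the row of the system looks like this (a sketch using the same im2var, G_x, G_y names; the full system is then solved with MATLAB's sparse backslash):

```matlab
e = im2var(i,j);                             % equation index for pixel (i,j)
A(e, im2var(i,j))   = -4;                    % -4*v(i,j) + 4 neighbors = div G
A(e, im2var(i-1,j)) = 1;   A(e, im2var(i+1,j)) = 1;
A(e, im2var(i,j-1)) = 1;   A(e, im2var(i,j+1)) = 1;
b(e) = G_x(i,j) - G_x(i-1,j) + G_y(i,j) - G_y(i,j-1);   % divergence of attenuated gradients
% after filling all rows: output = reshape(A \ b, imh, imw);
```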
7. Convert to the RGB color space
xyz(:,:,2) = exp(imresize(output,10));
bonus_tmp = xyz2rgb(xyz);
function [output] = weighten(type,im_type)
output = exp(-4*(x-0.5).^2/(0.5.^2));        % Gaussian weighting
function [Z] = image_stack(images,c,s)
h = size(images{1},1);                       % image height (avoids shadowing built-in length)
w_img = size(images{1},2);                   % image width
Z = zeros(s,size(images,2));                 % s samples x number of exposures
idx_pool = 1:h*w_img;
sample_set = randsample(idx_pool,s);         % same s pixel locations for every exposure
% per exposure i = 1:size(images,2)
tmp = reshape(images{i}(:,:,c),[h*w_img,1]);
Z(:,i) = tmp(sample_set);
function [g, lE] = gsolve(Z,B,l,w)
A = sparse(size(Z,2)*size(Z,1)+n+1,n+size(Z,1));
function [G_pyramid] = genpyramid(img,level)
G_pyramid = cell(1,level+1);
G_pyramid{1} = img;                          % level 0 is the input itself
for i = 2:level+1
    G_pyramid{i} = G_filter(G_pyramid{i-1}); % blur and downsample each level
end
function [filter_img] = G_filter(img)
g_filter = [1 2 1; 2 4 2; 1 2 1]/16;         % 3x3 binomial (Gaussian) kernel
temp_img = imfilter(img,g_filter,"replicate","same");
filter_img = temp_img(1:2:size(temp_img,1),1:2:size(temp_img,2));   % downsample by 2